50 research outputs found

    Brain-Machine Interfaces for Real-time Speech Synthesis

    This paper reports on studies involving brain-machine interfaces (BMIs) that provide near-instantaneous audio feedback from a speech synthesizer to the BMI user. In one study, neural signals recorded by an intracranial electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer with locked-in syndrome were transmitted wirelessly across the scalp and used to drive a formant synthesizer, allowing the user to produce vowels. In a second, pilot study, a neurologically normal user was able to drive the formant synthesizer with imagined movements detected using electroencephalography. Our results support the feasibility of neural prostheses with the potential to provide near-conversational synthetic speech for individuals with severely impaired speech output.

    A screening protocol incorporating brain-computer interface feature matching considerations for augmentative and alternative communication

    Purpose: The use of standardized screening protocols may inform brain-computer interface (BCI) research procedures to help maximize BCI performance outcomes and provide foundational information for clinical translation. Therefore, in this study we developed and evaluated a new BCI screening protocol incorporating cognitive, sensory, motor, and motor imagery tasks. Methods: Following development, BCI screener outcomes were compared to the Amyotrophic Lateral Sclerosis Cognitive Behavioral Screen (ALS-CBS) and the ALS Functional Rating Scale (ALS-FRS) for twelve individuals with a neuromotor disorder. Results: Scores on the cognitive portion of the BCI screener demonstrated limited variability, indicating all participants possessed core BCI-related skills. When compared to the ALS-CBS, the BCI screener was able to modestly discriminate possible cognitive difficulties that are likely to influence BCI performance. In addition, correlations between the motor imagery section of the screener and the ALS-CBS and ALS-FRS were non-significant, suggesting the BCI screener may provide information not captured on other assessment tools. Additional differences were found between motor imagery tasks, with greater self-ratings on first-person explicit imagery of familiar tasks compared to unfamiliar/generic BCI tasks. Conclusion: The BCI screener captures factors likely relevant for BCI, which has value for guiding person-centered BCI assessment across different devices to help inform BCI trials. Includes supplemental data.

    Guidelines for Feature Matching Assessment of Brain–Computer Interfaces for Augmentative and Alternative Communication

    Purpose: Brain–computer interfaces (BCIs) can provide access to augmentative and alternative communication (AAC) devices using neurological activity alone, without voluntary movements. As with traditional AAC access methods, BCI performance may be influenced by the cognitive–sensory–motor and motor imagery profiles of those who use these devices. Therefore, we propose a person-centered, feature matching framework consistent with clinical AAC best practices to ensure selection of the most appropriate BCI technology to meet individuals' communication needs. Method: The proposed feature matching procedure is based on the current state of the art in BCI technology and published reports on cognitive, sensory, motor, and motor imagery factors important for successful operation of BCI devices. Results: Considerations for successful selection of BCI for accessing AAC are summarized based on interpretation from a multidisciplinary team with experience in AAC, BCI, neuromotor disorders, and cognitive assessment. The set of features that support each BCI option are discussed in a hypothetical case format to model possible transition of BCI research from the laboratory into clinical AAC applications. Conclusions: This procedure is an initial step toward consideration of feature matching assessment for the full range of BCI devices. Future investigations are needed to fully examine how person-centered factors influence BCI performance across devices.

    Motor-Induced Suppression of the N100 Event-Related Potential During Motor Imagery Control of a Speech Synthesizer Brain–Computer Interface

    Purpose: Speech motor control relies on neural processes for generating sensory expectations using an efference copy mechanism to maintain accurate productions. The N100 auditory event-related potential (ERP) has been identified as a possible neural marker of the efference copy, with a reduced amplitude during active listening while speaking compared to passive listening. This study investigates N100 suppression while controlling a motor imagery speech synthesizer brain–computer interface (BCI) with instantaneous auditory feedback, to determine whether similar mechanisms are used for monitoring BCI-based speech output. Such mechanisms may both support BCI learning through existing speech motor networks and serve as a clinical marker for speech network integrity in individuals without severe speech and physical impairments. Method: The motor-induced N100 suppression is examined based on data from 10 participants who controlled a BCI speech synthesizer using limb motor imagery. We considered listening to auditory target stimuli (without motor imagery) in the BCI study as passive listening, and listening to BCI-controlled speech output (with motor imagery) as active listening, since audio output depends on imagined movements. The resulting ERP was assessed for statistical significance using a mixed-effects general linear model. Results: Statistically significant N100 ERP amplitude differences were observed between active and passive listening during the BCI task. Post hoc analyses confirm the N100 amplitude was suppressed during active listening. Conclusion: Observation of the N100 suppression suggests motor planning brain networks are active as participants control the BCI synthesizer, which may aid speech BCI mastery.
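The amplitude measure behind such a comparison can be sketched as follows: average each condition's epochs into an ERP and take the mean voltage in a window around 100 ms post-stimulus. This is a minimal illustration; the sampling rate, window bounds, and function name are assumptions, not the study's actual analysis parameters.

```python
import numpy as np

FS = 250               # sampling rate in Hz (assumed)
WINDOW = (0.08, 0.12)  # 80-120 ms post-stimulus (assumed N100 window)

def n100_amplitude(epochs: np.ndarray) -> float:
    """Mean voltage in the N100 window of the trial-averaged ERP.

    epochs: (n_trials, n_samples) array, time-locked to stimulus onset.
    """
    erp = epochs.mean(axis=0)                     # average across trials
    i0, i1 = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return float(erp[i0:i1].mean())

# Suppression would appear as a less negative N100 during active listening,
# i.e. n100_amplitude(active_epochs) > n100_amplitude(passive_epochs).
```

The statistical comparison in the study used a mixed-effects general linear model over such per-condition amplitudes; this sketch covers only the amplitude extraction step.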

    Development of speech prostheses: current status and recent advances

    This is an Accepted Manuscript of an article published by Taylor & Francis in Expert Review of Medical Devices in September 2010, available online: http://www.tandfonline.com/10.1586/erd.10.34. Brain–computer interfaces (BCIs) have been developed over the past decade to restore communication to persons with severe paralysis. In the most severe cases of paralysis, known as locked-in syndrome, patients retain cognition and sensation but are capable of only slight voluntary eye movements. For these patients, no standard communication method is available, although some can use BCIs to communicate by selecting letters or words on a computer. Recent research has sought to improve on existing techniques by using BCIs to create a direct prediction of speech utterances rather than to simply control a spelling device. Such methods are the first steps toward speech prostheses, as they are intended to entirely replace the vocal apparatus of paralyzed users. This article outlines many well-known methods for restoration of communication by BCI and illustrates the difference between spelling devices and direct speech prediction, or speech prosthesis.

    A Noninvasive Brain-Computer Interface for Real-Time Speech Synthesis: The Importance of Multimodal Feedback.

    We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined AV feedback led to the greatest performance in terms of percent accuracy, distance to target, and movement time to target compared with either unimodal feedback of auditory or visual information. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than serving as a generic biofeedback signal of BCI progress.
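The formant-plane feedback described above can be sketched as a mapping from a normalized 2-D cursor position to the first two formant frequencies. This is a minimal illustration assuming a linear mapping; the frequency ranges and function name are illustrative approximations of adult vowel space, not the study's actual synthesizer parameters.

```python
# Illustrative F1/F2 ranges spanning the vowel space (Hz).
F1_RANGE = (250.0, 850.0)   # low F1 near /i/, /u/; high F1 near /A/
F2_RANGE = (600.0, 2300.0)  # low F2 near /u/; high F2 near /i/

def cursor_to_formants(x: float, y: float) -> tuple[float, float]:
    """Linearly interpolate cursor coordinates (x, y in [0, 1]) into (F1, F2) in Hz."""
    f1 = F1_RANGE[0] + y * (F1_RANGE[1] - F1_RANGE[0])
    f2 = F2_RANGE[0] + x * (F2_RANGE[1] - F2_RANGE[0])
    return f1, f2

# Corners of the plane land near canonical vowel targets:
print(cursor_to_formants(1.0, 0.0))  # high F2, low F1 -> /i/-like
print(cursor_to_formants(0.0, 1.0))  # low F2, high F1 -> /A/-like
```

Under this kind of mapping, the visual cursor and the synthesized audio are two renderings of the same control signal, which is what makes the multimodal feedback task-relevant rather than generic.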

    Examining sensory ability, feature matching, and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential

    This is an Accepted Manuscript of an article published by Taylor & Francis in Disability and Rehabilitation: Assistive Technology on 01/31/2018, available online: http://www.tandfonline.com/10.1080/17483107.2018.1428369. PURPOSE: We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. METHODS: Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady-state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. RESULTS: All participants were able to interact with the BCI, and the highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. CONCLUSION: As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long-term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs. Implications for Rehabilitation: Brain-computer interfaces using the steady-state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access to language and literacy for individuals with neuromotor impairment. Comprehensive assessments are needed to fully understand the sensory, motor, and cognitive abilities of individuals who may use BCIs, supporting proper feature matching and selection of the most appropriate device, including optimization of device layouts and control paradigms. Oculomotor impairments negatively impact BCIs that use the steady-state visually evoked potential, but modifications that place interface stimuli and communication items in the intact visual field can improve outcomes.

    The Unlock Project: A Python-based framework for practical brain-computer interface communication “app” development

    In this paper, we present a framework for reducing the development time needed to create applications for non-invasive brain-computer interfaces (BCIs). Our framework primarily focuses on facilitating rapid software “app” development akin to current efforts in consumer portable computing (e.g., smartphones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, relying instead on a core subsystem that exchanges standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real time, with updates occurring 50–60 times per second and limited computational overhead (5 ms system lag), while providing accurate data acquisition and signal analysis.
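The intermodule pattern the abstract describes, where apps exchange standard messages through a core subsystem rather than calling each other directly, can be sketched as a publish/subscribe bus driving a fixed-rate update loop. The names here (MessageBus, publish/subscribe, topic strings) are illustrative, not the Unlock Project's actual API.

```python
import time
from collections import defaultdict

class MessageBus:
    """Core subsystem: routes standard messages between app modules."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

def run(bus, rate_hz=60, n_frames=60):
    """Fixed-rate update loop publishing one frame per tick (~rate_hz Hz)."""
    period = 1.0 / rate_hz
    for frame in range(n_frames):
        start = time.perf_counter()
        bus.publish("eeg/sample", {"frame": frame})  # acquisition -> decoders
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, period - elapsed))       # hold the frame rate

# A decoder module only needs to subscribe to the topics it consumes:
bus = MessageBus()
bus.subscribe("eeg/sample", lambda msg: None)  # placeholder decoder
run(bus, rate_hz=60, n_frames=6)               # ~0.1 s of 60 Hz updates
```

The design benefit is the one the abstract claims: app developers write only subscribers and publishers, while timing and data routing live in the core loop.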

    Brain-Computer Interfaces for Speech Communication

    This paper briefly reviews current silent speech methodologies for normal and disabled individuals. Current techniques utilizing electromyographic (EMG) recordings of vocal tract movements are useful for physically healthy individuals but fail for tetraplegic individuals who do not have accurate voluntary control over the speech articulators. Alternative methods utilizing EMG from other body parts (e.g., hand, arm, or facial muscles) or electroencephalography (EEG) can provide capable silent communication to severely paralyzed users, though current interfaces are extremely slow relative to normal conversation rates and require constant attention to a computer screen that provides visual feedback and/or cueing. We present a novel approach to the problem of silent speech via an intracortical microelectrode brain-computer interface (BCI) that predicts intended speech information directly from the activity of neurons involved in speech production. The predicted speech is synthesized and acoustically fed back to the user with a delay under 50 ms. We demonstrate that the Neurotrophic Electrode used in the BCI is capable of providing useful neural recordings for over 4 years, a necessary property for BCIs that must remain viable over the lifespan of the user. Other design considerations include neural decoding techniques based on previous research involving BCIs for computer cursor or robotic arm control via prediction of intended movement kinematics from motor cortical signals in monkeys and humans. Initial results from a study of continuous speech production with instantaneous acoustic feedback show the BCI user was able to improve his control over an artificial speech synthesizer both within and across recording sessions. The success of this initial trial validates the potential of the intracortical microelectrode-based approach for providing a speech prosthesis that can allow much more rapid communication rates.
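The kinematic-decoding idea carried over from cursor and arm control can be sketched as a linear map from neural firing rates to intended formant-frequency kinematics, fit by least squares. This is a toy illustration on synthetic data under assumed shapes and a linear model; it is not the paper's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_units = 200, 12

# Synthetic firing rates (samples x units) and a hidden linear mapping
# from unit activity to two formant-frequency outputs (F1, F2).
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)
true_W = rng.normal(size=(n_units, 2))
formants = rates @ true_W + rng.normal(scale=0.1, size=(n_samples, 2))

# Fit the decoder weights by least squares, then predict trajectories.
W, *_ = np.linalg.lstsq(rates, formants, rcond=None)
predicted = rates @ W
```

In a closed-loop BCI, `predicted` at each time step would drive the synthesizer, and the sub-50 ms audio feedback gives the user an error signal for improving control across sessions.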

    Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex

    This is the published version, also available here: http://dx.doi.org/10.3389/fnins.2011.00065. We conducted a neurophysiological study of attempted speech production in a paralyzed human volunteer using chronic microelectrode recordings. The volunteer suffers from locked-in syndrome, leaving him in a state of near-total paralysis, though he maintains good cognition and sensation. In this study, we investigated the feasibility of supervised classification techniques for prediction of intended phoneme production in the absence of any overt movements, including speech. Such classification or decoding ability has the potential to greatly improve the quality of life of many people who are otherwise unable to speak by providing a direct communicative link to the general community. We examined the performance of three classifiers on a multi-class discrimination problem in which the items were 38 American English phonemes, including monophthong and diphthong vowels and consonants. The three classifiers differed in performance but averaged between 16 and 21% overall accuracy (chance level is 1/38, or 2.6%). Further, the distribution of phonemes classified statistically above chance was non-uniform, though 20 of 38 phonemes were classified with statistical significance by all three classifiers. These preliminary results suggest supervised classification techniques are capable of performing large-scale multi-class discrimination for attempted speech production and may provide the basis for future communication prostheses.
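The significance logic behind "classified statistically above chance" can be sketched with a one-sided binomial test: with 38 equiprobable phonemes, chance accuracy is 1/38 (about 2.6%), and the test gives the smallest number of correct trials out of n that beats chance at a given alpha. The function names and trial count below are illustrative assumptions, not the paper's exact test.

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_correct_above_chance(n_trials: int, n_classes: int = 38,
                             alpha: float = 0.05) -> int:
    """Smallest number of correct trials significantly above chance."""
    p = 1.0 / n_classes
    for k in range(n_trials + 1):
        if binom_sf(k, n_trials, p) < alpha:
            return k
    return n_trials + 1  # unreachable for reasonable n_trials

print(1 / 38)                         # chance level, ~0.026
print(min_correct_above_chance(100))  # correct trials needed out of 100
```

Per-phoneme corrections (e.g., for testing all 38 classes) would tighten alpha further; this sketch shows only the uncorrected single-class threshold.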